
    Morphological tools for spatial and multiscale analysis of passive microwave remote sensing data

    Earth Observation through microwave radiometry is particularly useful for various applications, e.g., soil moisture, ocean salinity, or sea ice cover. However, most image processing and data analysis techniques aiming to provide automatic measurements from remote sensing data do not rely on any spatial information, similarly to the early years of optical/hyperspectral remote sensing. After more than a decade of research, it has been observed that spatial information can significantly improve the accuracy of land use/land cover maps. In this context, the goal of this paper is to offer a few insights on how spatial information can benefit (passive) microwave remote sensing. To do so, we focus on mathematical morphology and provide illustrative examples where morphological operators can improve the processing and analysis of microwave radiometric information. Such tools had great influence on multispectral/hyperspectral remote sensing in the past, and are expected to have a similar impact in the microwave field with the launch of upcoming missions with improved spatial resolution, e.g., SMOS-NEXT.
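To make the role of morphological operators concrete, here is a minimal sketch of a flat grayscale opening in plain Python. The 3x3 structuring element and the toy "brightness temperature" grid are illustrative assumptions, not data or code from the paper.

```python
def erode(img, size=3):
    """Grayscale erosion: minimum over the local window."""
    h, w, r = len(img), len(img[0]), size // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(
                img[ii][jj]
                for ii in range(max(0, i - r), min(h, i + r + 1))
                for jj in range(max(0, j - r), min(w, j + r + 1))
            )
    return out

def dilate(img, size=3):
    """Grayscale dilation: maximum over the local window."""
    h, w, r = len(img), len(img[0]), size // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = max(
                img[ii][jj]
                for ii in range(max(0, i - r), min(h, i + r + 1))
                for jj in range(max(0, j - r), min(w, j + r + 1))
            )
    return out

def opening(img, size=3):
    """Opening = erosion then dilation; removes bright structures
    smaller than the structuring element."""
    return dilate(erode(img, size), size)

# Toy image: flat background with one isolated bright pixel (e.g. noise).
image = [[10] * 5 for _ in range(5)]
image[2][2] = 90
opened = opening(image)
print(opened[2][2])  # -> 10: the isolated peak is flattened away
```

The spatial footprint of the structuring element is what injects neighborhood information into an otherwise pixel-wise radiometric measurement.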

    ADRMX: Additive Disentanglement of Domain Features with Remix Loss

    The common assumption that train and test sets follow similar distributions is often violated in deployment settings. Given multiple source domains, domain generalization aims to create robust models capable of generalizing to new, unseen domains. To this end, most existing studies focus on extracting domain-invariant features across the available source domains in order to mitigate the effects of inter-domain distributional changes. However, this approach may limit the model's generalization capacity by relying solely on finding features common to the source domains: it overlooks domain-specific characteristics that may be prevalent in a subset of domains and carry valuable information. In this work, a novel architecture named Additive Disentanglement of Domain Features with Remix Loss (ADRMX) is presented, which addresses this limitation by incorporating domain-variant features together with the domain-invariant ones using an original additive disentanglement strategy. Moreover, a new data augmentation technique is introduced to further support the generalization capacity of ADRMX, in which samples from different domains are mixed within the latent space. Through extensive experiments conducted on DomainBed under fair conditions, ADRMX is shown to achieve state-of-the-art performance. Code will be made available on GitHub after the revision process.
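A toy sketch of the additive idea, assuming the split of a latent code into invariant and domain-specific parts is already given: keep one sample's invariant part and convexly mix the domain-specific parts across domains. The vectors and mixing coefficient are purely illustrative; the actual model learns both parts with neural encoders and a remix loss.

```python
def remix(z_inv_a, z_spec_a, z_spec_b, lam):
    """Build an augmented latent: keep sample a's domain-invariant part,
    mix the domain-specific parts of domains a and b."""
    mixed_spec = [lam * sa + (1 - lam) * sb for sa, sb in zip(z_spec_a, z_spec_b)]
    # Additive composition: z = z_invariant + z_specific.
    return [zi + ms for zi, ms in zip(z_inv_a, mixed_spec)]

z_inv = [1.0, 2.0]    # shared, domain-invariant content
spec_a = [0.5, -0.5]  # domain A specific component
spec_b = [-0.5, 0.5]  # domain B specific component

z_new = remix(z_inv, spec_a, spec_b, lam=0.5)
print(z_new)  # -> [1.0, 2.0]: the opposite specific parts cancel at lam=0.5
```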

    Sabanci-Okan system at ImageClef 2011: plant identification task

    We describe our participation in the plant identification task of ImageClef 2011. Our approach employs a variety of texture, shape, and color descriptors. Due to the morphometric properties of plants, mathematical morphology has been advocated as the main methodology for texture characterization, supported by a multitude of contour-based shape and color features. We submitted a single run, where the focus was almost exclusively on scan and scan-like images, due primarily to lack of time. Moreover, special care was taken to obtain a fully automatic system, operating only on image data. While our photo results are low, we consider our submission successful since, besides being our first attempt, our accuracy is the highest when considering the average of the scan and scan-like results, upon which we had concentrated our efforts.

    Vector attribute profiles for hyperspectral image classification

    Morphological attribute profiles are among the most prominent spectral-spatial pixel description methods. They are efficient, effective, and highly customizable multiscale tools based on hierarchical representations of a scalar input image. Their application to multivariate images in general, and hyperspectral images in particular, has so far been conducted using the marginal strategy, i.e., by processing each image band (possibly obtained through a dimension reduction technique) independently. In this paper, we investigate the alternative vector strategy, which consists of processing the available image bands simultaneously. The vector strategy is based on a vector ordering relation that leads to the computation of a single max- and min-tree per hyperspectral dataset, from which attribute profiles can then be computed as usual. We explore known vector ordering relations for constructing such max-trees and subsequently vector attribute profiles, and introduce a combination of the marginal and vector strategies. We provide an experimental comparison of these approaches in the context of hyperspectral classification on common datasets, where the proposed approach outperforms the widely used marginal strategy.
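The difference between the marginal and vector strategies can be seen on a handful of two-band pixel vectors. The lexicographic order used below is one known total vector ordering (the sample values are illustrative): the marginal strategy can manufacture "false vectors" that occur in no pixel, while a vector ordering always selects an actual pixel value.

```python
pixels = [(3, 1), (2, 9), (3, 0)]  # two-band "hyperspectral" pixel vectors

# Marginal strategy: each band is processed independently.
marginal_max = tuple(max(p[b] for p in pixels) for b in range(2))

# Vector strategy: a single total order (here lexicographic) on whole
# vectors; Python tuples already compare lexicographically.
vector_max = max(pixels)

print(marginal_max)  # -> (3, 9): a "false vector" present in no pixel
print(vector_max)    # -> (3, 1): an actual pixel of the image
```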

    Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture

    Agricultural robots rely on semantic segmentation to distinguish between crops and weeds, in order to perform selective treatments and increase yield and crop health while reducing the amount of chemicals used. Deep learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques also rely on a large amount of training data, requiring a substantial labelling effort, both of which are scarce in precision agriculture. Additional design efforts are required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labelling effort required for a new crop. We examine the classification performance on three datasets with different crop types, each containing a variety of weeds, and compare the performance and retraining effort required when using data labelled at pixel level with partially labelled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible and reduces training times by up to 80%. Furthermore, we show that even when the data used for retraining is imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.
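The transfer idea can be reduced to a toy: keep a "feature extractor" trained on the source crop frozen and retrain only a small head on the new crop's few labels. The fixed extractor weights, the data, and the perceptron head below are all illustrative assumptions; the paper works with deep segmentation networks, not perceptrons.

```python
W_frozen = [[1.0, 0.0], [0.0, 1.0]]  # "pretrained" extractor, never updated

def extract(x):
    """Frozen feature extractor (here a fixed linear map)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_frozen]

def train_head(data, labels, epochs=20, lr=0.1):
    """Perceptron-style head retrained on the new crop's labels only."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = extract(x)
            pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
            b += lr * err
    return w, b

# New-crop samples: crop pixels (label 1) vs weed pixels (label 0).
X = [[2.0, 0.5], [1.5, 1.0], [-1.0, -0.5], [-2.0, 0.0]]
y = [1, 1, 0, 0]
w, b = train_head(X, y)
preds = [1 if sum(wi * fi for wi, fi in zip(w, extract(x))) + b > 0 else 0
         for x in X]
print(preds)  # -> [1, 1, 0, 0]: only the tiny head needed retraining
```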

    Unsupervised Domain Adaptation for Semantic Segmentation using One-shot Image-to-Image Translation via Latent Representation Mixing

    Domain adaptation is one of the prominent strategies for handling both domain shift, which is widely encountered in large-scale land use/land cover map calculation, and the scarcity of pixel-level ground truth that is crucial for supervised semantic segmentation. Studies focusing on adversarial domain adaptation via re-styling source domain samples, commonly through generative adversarial networks, have reported varying levels of success, yet they suffer from semantic inconsistencies and visual corruptions, and often require a large number of target domain samples. In this letter, we propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images that i) leads to semantically consistent and noise-free images, ii) operates with a single target domain sample (i.e., one-shot), and iii) requires only a fraction of the parameters of state-of-the-art methods. More specifically, an image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains, and a perceptual network module and loss function are further introduced to enforce semantic consistency. Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods. Our source code will be available at \url{https://github.com/Sarmadfismael/LRM_I2I}.
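A minimal sketch of mixing latent representations across domains in an encoder-decoder pipeline: encode the source image and the single target sample, then assemble a mixed latent channel-wise. The "encoder" (one value per band) and the binary mixing mask are illustrative stand-ins, not the architecture of the letter.

```python
def encode(img):
    """Toy encoder: one latent channel per band (its mean value)."""
    return [sum(band) / len(band) for band in img]

def mix_latents(z_src, z_tgt, mask):
    """Channel-wise mix: masked channels come from the target domain."""
    return [zt if m else zs for zs, zt, m in zip(z_src, z_tgt, mask)]

src = [[0.25, 0.75], [0.5, 1.0]]  # source-domain image, 2 bands
tgt = [[1.0, 1.5], [0.0, 0.5]]    # single target-domain sample (one-shot)

z_mix = mix_latents(encode(src), encode(tgt), mask=[1, 0])
print(z_mix)  # -> [1.25, 0.75]: first channel from target, second from source
```

A decoder would then reconstruct a target-styled image from `z_mix`; in the actual method a perceptual loss keeps that reconstruction semantically consistent.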

    Open-set plant identification using an ensemble of deep convolutional neural networks

    Open-set recognition, a challenging problem in computer vision, is concerned with identification or verification tasks where queries may belong to unknown classes. This work describes a fine-grained plant identification system consisting of an ensemble of deep convolutional neural networks within an open-set identification framework. Two well-known deep learning architectures, VGGNet and GoogLeNet, pretrained on the object recognition dataset of ILSVRC 2012, are fine-tuned using the plant dataset of LifeCLEF 2015. Moreover, GoogLeNet is fine-tuned using plant and non-plant images for rejecting samples from non-plant classes. Our systems have been evaluated on the test dataset of PlantCLEF 2016 by the campaign organizers, and our best proposed model has achieved an official score of 0.738 in terms of mean average precision, while the best official score is 0.742.
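One common way to combine an ensemble with open-set rejection is to average the members' class probabilities and reject a query as unknown when the top averaged score falls below a threshold. The probabilities and the 0.5 threshold below are illustrative assumptions, not values from the paper.

```python
def open_set_predict(prob_lists, threshold=0.5):
    """prob_lists: one class-probability vector per ensemble member.
    Returns the winning class index, or -1 for rejected/unknown."""
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n
           for i in range(len(prob_lists[0]))]
    best = max(range(len(avg)), key=lambda i: avg[i])
    return best if avg[best] >= threshold else -1

# A confident in-set query (both members agree on class 0)...
known = open_set_predict([[0.8, 0.1, 0.1], [0.7, 0.2, 0.1]])
# ...and an ambiguous query, e.g. a non-plant image.
unknown = open_set_predict([[0.4, 0.3, 0.3], [0.3, 0.4, 0.3]])
print(known, unknown)  # -> 0 -1
```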

    Satellite image retrieval with pattern spectra descriptors

    The increasing volume of Earth Observation data calls for appropriate solutions in satellite image retrieval. We address this problem by considering morphological descriptors called pattern spectra. Such descriptors are histogram-like structures that contain information on the distribution of predefined properties (attributes) of image components. They can be computed at both the local and the global scale, and are computationally attractive. We demonstrate how they can be embedded in an image retrieval framework and report their promising performance on a standard satellite image dataset.
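The histogram-like nature of such descriptors can be sketched with a pattern-spectrum-style summary: a histogram of the area attribute over the connected components of a binary image. Real pattern spectra are computed over grayscale component hierarchies; the binary input, 4-connectivity, and the area bins below are simplifying assumptions.

```python
def component_areas(img):
    """Areas of 4-connected foreground components (iterative flood fill)."""
    h, w = len(img), len(img[0])
    seen, areas = set(), []
    for si in range(h):
        for sj in range(w):
            if img[si][sj] and (si, sj) not in seen:
                stack, area = [(si, sj)], 0
                seen.add((si, sj))
                while stack:
                    i, j = stack.pop()
                    area += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < h and 0 <= nj < w
                                and img[ni][nj] and (ni, nj) not in seen):
                            seen.add((ni, nj))
                            stack.append((ni, nj))
                areas.append(area)
    return areas

def pattern_spectrum(img, bins=(1, 2, 4)):
    """Histogram of component areas over the given bin lower bounds."""
    spectrum = [0] * len(bins)
    for a in component_areas(img):
        for k in reversed(range(len(bins))):
            if a >= bins[k]:
                spectrum[k] += 1
                break
    return spectrum

image = [
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
]
print(pattern_spectrum(image))  # -> [1, 1, 1]: one small, one medium, one large
```

Two images with similar distributions of component sizes produce similar spectra, which is what makes such fixed-length histograms usable as retrieval descriptors.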

    Local Feature-Based Attribute Profiles for Optical Remote Sensing Image Classification

    This article introduces an extension of morphological attribute profiles (APs) by extracting their local features. The so-called local feature-based attribute profiles (LFAPs) are expected to provide a better characterization of each filtered pixel (i.e., each AP sample) within its neighborhood, and hence better capture local texture information from the image content. In this work, LFAPs are constructed by extracting simple first-order statistical features of the local patch around each AP sample, such as the mean, standard deviation, and range. The final feature vector characterizing each image pixel is then formed by combining all local features extracted from the APs of that pixel. In addition, since self-dual attribute profiles (SDAPs) have been shown to outperform APs in recent years, a similar process is applied to form local feature-based SDAPs (LFSDAPs). To evaluate the effectiveness of LFAPs and LFSDAPs, supervised classification with both Random Forest and Support Vector Machine classifiers is performed on the very high resolution Reykjavik image as well as the hyperspectral Pavia University data. Experimental results show that LFAPs (resp. LFSDAPs) can considerably improve the classification accuracy of standard APs (resp. SDAPs) and of the recently proposed histogram-based APs (HAPs).
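The local-feature step can be sketched as follows: for one pixel of an (attribute-)filtered image, describe its 3x3 neighborhood by first-order statistics (mean, standard deviation, range). The filtered values below are an illustrative stand-in for an AP channel, not data from the article.

```python
import math

def local_features(img, i, j, r=1):
    """Mean, standard deviation, and range of the (2r+1)x(2r+1) patch
    around pixel (i, j), clipped at the image borders."""
    h, w = len(img), len(img[0])
    patch = [
        img[ii][jj]
        for ii in range(max(0, i - r), min(h, i + r + 1))
        for jj in range(max(0, j - r), min(w, j + r + 1))
    ]
    mean = sum(patch) / len(patch)
    std = math.sqrt(sum((v - mean) ** 2 for v in patch) / len(patch))
    return mean, std, max(patch) - min(patch)

# Toy AP channel: flat response with one strong filtered pixel.
filtered = [
    [3, 3, 3],
    [3, 12, 3],
    [3, 3, 3],
]
mean, std, rng = local_features(filtered, 1, 1)
print(mean, rng)  # -> 4.0 9 (std is sqrt(8), about 2.83)
```

Concatenating such triples over every AP channel yields the per-pixel LFAP feature vector fed to the classifier.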